Optimality and Complexity for Constrained Optimization Problems with Nonconvex Regularization
Abstract
In this paper, we consider a class of constrained optimization problems in which the feasible set is a general closed convex set and the objective function has a nonsmooth, nonconvex regularizer. This class of regularizers includes the widely used SCAD, MCP, logistic, fraction, hard thresholding, and non-Lipschitz Lp penalties as special cases. Using the theory of the generalized directional derivative and the tangent cone, we derive a first-order necessary optimality condition for local minimizers of the problem and define its generalized stationary points. We show that a generalized stationary point is a Clarke stationary point when the objective function is Lipschitz continuous there, and that it satisfies existing necessary optimality conditions when the objective function is not Lipschitz continuous there. Moreover, we prove the consistency between the generalized directional derivative and the limit of the classical directional derivatives associated with a smoothing function. Finally, we establish a lower bound property for every local minimizer and show that finding a global minimizer is strongly NP-hard when the objective function has a concave regularizer.
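For concreteness, here is a minimal sketch (in Python, assuming NumPy) of the standard textbook forms of three of the penalties named above; the function names and the parameter defaults a = 3.7 and b = 2.0 are conventional illustrative choices, not definitions taken from this paper.

import numpy as np

def scad(t, lam=1.0, a=3.7):
    # SCAD penalty (Fan-Li form): linear near zero, quadratic transition,
    # then constant, so large coefficients are not over-penalized.
    t = np.abs(t)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

def mcp(t, lam=1.0, b=2.0):
    # Minimax concave penalty (Zhang form): tapers from a slope of lam
    # at zero to the constant b * lam**2 / 2 beyond |t| = b * lam.
    t = np.abs(t)
    return np.where(t <= b * lam, lam * t - t**2 / (2 * b), b * lam**2 / 2)

def lp(t, lam=1.0, p=0.5):
    # Non-Lipschitz Lp penalty with 0 < p < 1; its derivative blows up
    # at t = 0, the non-Lipschitz case the paper's theory covers.
    return lam * np.abs(t)**p

All three are concave in |t|, which is the structure behind the strong NP-hardness statement at the end of the abstract.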
Similar papers
Optimality conditions for approximate solutions of vector optimization problems with variable ordering structures
We consider nonconvex vector optimization problems with variable ordering structures in Banach spaces. Under certain boundedness and continuity properties, we present necessary conditions for approximate solutions of these problems. Using a generic approach to subdifferentials, we derive necessary conditions for approximate minimizers and approximately minimal solutions of vector optimizatio...
Sample Complexity of Stochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization
The popular cubic regularization (CR) method converges with first- and second-order optimality guarantees for nonconvex optimization, but suffers from high sample complexity when solving large-scale problems. Various sub-sampling variants of CR have been proposed to improve the sample complexity. In this paper, we propose a stochastic variance-reduced cubic-regularized (SVRC) Newton's method ...
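For background, the CR step referred to here is the standard one from the cubic regularization literature (Nesterov-Polyak; Cartis-Gould-Toint), not a construction specific to this snippet: each iteration minimizes a cubic-regularized second-order model,

\[ s_k \in \arg\min_{s}\; \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3}\, \|s\|^3, \qquad x_{k+1} = x_k + s_k. \]

Sub-sampled variants replace the gradient and Hessian with mini-batch estimates, which is where the sample complexity question arises.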
Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
This paper studies a fundamental bicriteria optimization problem for variable selection in statistical learning; the two criteria are a loss/residual function and a model control (also called a regularization or penalty term). The former measures the fit of the learning model to data; the latter controls the complexity of the model. We focus on the case where ...
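To illustrate the difference-of-convex structure this snippet refers to (a standard observation, not a claim specific to the paper), the MCP penalty sketched earlier can be written as a convex l1 term minus a convex remainder:

\[ \phi_{\mathrm{MCP}}(t) = \lambda |t| - \psi(t), \qquad \psi(t) = \begin{cases} t^2/(2b), & |t| \le b\lambda, \\ \lambda |t| - b\lambda^2/2, & |t| > b\lambda, \end{cases} \]

where \psi is convex (quadratic in the middle, linear tails with matching slopes), so minimizing a convex loss plus MCP is a DC program.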
Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization
In this paper, we consider a wide class of constrained nonconvex regularized minimization problems, where the constraints are linear. It has been reported in the literature that nonconvex regularization usually yields solutions with more desirable sparsity properties than convex regularization does. However, it is not easy to obtain the proximal mapping associated with nonconvex regulariz...
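For reference, the proximal mapping mentioned here is the standard one: for a regularizer \phi and step size \alpha > 0,

\[ \operatorname{prox}_{\alpha\phi}(z) \in \arg\min_{x}\; \phi(x) + \frac{1}{2\alpha}\, \|x - z\|^2, \]

which may be set-valued and lacks a simple closed form for many nonconvex \phi; this is the computational difficulty the snippet points to.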
LANCS Workshop on Modelling and Solving Complex Optimisation Problems
Towards optimal Newton-type methods for nonconvex smooth optimization. Coralia Cartis, Coralia.Cartis (at) ed.ac.uk, School of Mathematics, Edinburgh University. We show that the steepest-descent and Newton methods for unconstrained non-convex optimization, under standard assumptions, may both require a number of iterations and function evaluations arbitrarily close to the steepest-descent's global...
Journal: Math. Oper. Res.
Volume: 42, Issue: -
Pages: -
Published: 2017